User-generated-content (UGC) videos have dominated the Internet in recent years. While many methods attempt to objectively assess the quality of these UGC videos, the mechanisms of human quality perception in the UGC-VQA problem are still to be explored. To better explain the quality perception mechanisms and to learn more robust representations, we aim to disentangle the effects of aesthetic quality issues and technical quality issues arising from the complicated video generation processes in the UGC-VQA problem. To overcome the absence of respective supervisions during disentanglement, we propose the Limited View Biased Supervisions (LVBS) scheme, where two separate evaluators are trained with decomposed views specifically designed for each issue. Composed of an Aesthetic Quality Evaluator (AQE) and a Technical Quality Evaluator (TQE) under the LVBS scheme, the proposed Disentangled Objective Video Quality Evaluator (DOVER) reaches excellent performance (0.91 SRCC on KoNViD-1k, 0.89 SRCC on LSVQ, 0.88 SRCC on YouTube-UGC) on the UGC-VQA problem. More importantly, our blind subjective studies prove that the separate evaluators in DOVER can effectively match human perception on the respective disentangled quality issues. Codes and demos are released at https://github.com/teowu/dover.
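The two-branch design above can be sketched as a simple score fusion: each evaluator rates its own disentangled quality issue and the branch scores are combined. The function name, inputs, and the linear fusion rule below are illustrative assumptions, not DOVER's actual formulation.

```python
# Hypothetical sketch of a two-branch quality fusion, assuming a simple
# weighted linear combination (an assumption, not the paper's exact rule).

def fuse_scores(aesthetic_score: float, technical_score: float,
                w_aesthetic: float = 0.5) -> float:
    """Combine the aesthetic-branch and technical-branch scores
    into a single overall quality rating."""
    return w_aesthetic * aesthetic_score + (1.0 - w_aesthetic) * technical_score

# Equal weighting averages the two branch scores (here, approx. 0.7).
overall = fuse_scores(0.8, 0.6)
```

A learned weighting (or a rank-based fusion) could replace the fixed `w_aesthetic` without changing the overall two-branch structure.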
Person search is a challenging task that aims at joint pedestrian detection and person re-identification (re-ID). Previous works have achieved significant progress under fully and weakly supervised settings. However, existing methods ignore the generalization ability of person search models. In this paper, we take a further step and present Domain Adaptive Person Search (DAPS), which aims to generalize a model from a labeled source domain to an unlabeled target domain. Two major challenges arise in this new setting: one is how to simultaneously solve the domain misalignment issue for both the detection and re-ID tasks, and the other is how to train the re-ID subtask on the target domain without reliable detection results. To address these challenges, we propose a strong baseline framework with two dedicated designs. 1) We design a domain alignment module, including image-level and task-sensitive instance-level alignments, to minimize the domain discrepancy. 2) We take full advantage of the unlabeled data through a dynamic clustering strategy, and use pseudo bounding boxes to support re-ID and detection training on the target domain. With the above designs, our framework achieves 34.7% mAP and 80.6% top-1 accuracy on the PRW dataset, surpassing the direct-transfer baseline by a large margin. Surprisingly, the performance of our unsupervised DAPS model even surpasses some fully and weakly supervised methods. The code is available at https://github.com/caposerenity/daps.
With the rapid growth of in-the-wild videos taken by non-specialists, blind video quality assessment (VQA) has become a challenging and demanding problem. Although lots of effort has been made to solve this problem, it remains little explored how the human visual system (HVS) relates to the temporal quality of videos. Meanwhile, recent work has found that frames of natural videos, transformed into the perceptual domain of the HVS, tend to form straight trajectories of representations. With the obtained insight that distortion impairs the perceived video quality and results in a curved trajectory of the perceptual representations, we propose a Temporal Perceptual Quality Index (TPQI) to measure temporal distortion by describing the graphic morphology of the representations. Specifically, we first extract the video perceptual representations from the lateral geniculate nucleus (LGN) and primary visual area (V1) of the HVS, and then measure the straightness and compactness of their trajectories to quantify the degradation in naturalness and content continuity of the video. Experiments show that the perceptual representation in the HVS is an effective way of predicting subjective temporal quality, and thus TPQI can, for the first time, achieve performance comparable to spatial quality metrics and be even more effective in assessing videos with large temporal variations. We further demonstrate that, combined with NIQE, a spatial quality metric, TPQI can reach top performance on popular in-the-wild video datasets. More importantly, TPQI requires no additional information beyond the video being evaluated, and thus can be applied to any dataset without parameter tuning. Source code is available at https://github.com/uolmm/tpqi-vqa.
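The trajectory straightness measure mentioned above can be sketched minimally as the mean cosine between successive displacement vectors of the per-frame representations; this is an illustrative assumption, and TPQI's exact definition may differ.

```python
import numpy as np

def trajectory_straightness(feats: np.ndarray) -> float:
    """Mean cosine similarity between successive displacement vectors.

    feats: (T, D) array of per-frame perceptual representations.
    Returns 1.0 for a perfectly straight trajectory; lower values
    indicate a more curved (more temporally distorted) trajectory.
    """
    diffs = np.diff(feats, axis=0)                      # (T-1, D) displacements
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    units = diffs / np.clip(norms, 1e-8, None)          # unit step directions
    cosines = np.sum(units[:-1] * units[1:], axis=1)    # cos of turn angles
    return float(np.mean(cosines))

# A straight line in feature space scores 1.0:
line = np.outer(np.arange(5), np.ones(3))
assert abs(trajectory_straightness(line) - 1.0) < 1e-6
```

The same per-frame features could feed a compactness measure (e.g., total path length versus endpoint distance) to capture the content-continuity degradation the abstract describes.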
Night imaging with modern smartphone cameras is troublesome due to low photon counts and unavoidable noise in the imaging system. Directly adjusting the exposure time and ISO rating cannot acquire sharp and noise-free images simultaneously in low-light conditions. Though many methods have been proposed to enhance noisy or blurry night images, their performance on real-world night photos remains unsatisfactory for two main reasons: 1) the limited information in a single image, and 2) the domain gap between synthetic training images and real-world photos (e.g., differences in blur areas and resolutions). To exploit the information in successive long- and short-exposure images, we propose a learning-based pipeline to fuse them. A D2HNet framework is developed to recover a high-quality image by deblurring and enhancing a long-exposure image under the guidance of a short-exposure image. To shrink the domain gap, we leverage a two-phase DeblurNet-EnhanceNet architecture, which performs accurate blur removal at a fixed low resolution so that it can handle large ranges of blur across inputs of different resolutions. In addition, we synthesize a D2-Dataset from HD videos and experiment on it. Results on the validation set and on real photos demonstrate that our method achieves better visual quality and state-of-the-art quantitative scores. The D2HNet codes, models, and D2-Dataset can be found at https://github.com/zhaoyuzhi/D2HNet.
Current deep video quality assessment (VQA) methods usually incur high computational costs when evaluating high-resolution videos. This prevents them from learning better video-quality-related representations via end-to-end training. Existing approaches typically consider naive sampling to reduce the computational cost, such as resizing and cropping. However, these clearly corrupt quality-related information in videos and are thus not optimal for learning good representations for VQA. There is therefore a pressing need to design a new, quality-retaining sampling scheme for VQA. In this paper, we propose Grid Mini-patch Sampling (GMS), which allows local quality to be considered by sampling patches at their raw resolution, and covers global quality via mini-patches sampled on a uniform grid. These mini-patches are spliced and temporally aligned, and named fragments. We further build the Fragment Attention Network (FANet), specially designed to accommodate fragments as inputs. Consisting of GMS and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by around 10% while reducing FLOPs by 99.5% on 1080p high-resolution videos. The newly learned video-quality-related representations can also be transferred to smaller VQA datasets, boosting performance in those scenarios. Extensive experiments show that FAST-VQA performs well on inputs of various resolutions while retaining high efficiency. We release our code at https://github.com/timothyhtimothy/fast-vqa.
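The grid mini-patch sampling idea can be sketched roughly as below: take a small raw-resolution patch from every cell of a uniform grid over a frame, then splice the patches into one compact "fragment" image. The grid size, patch size, and top-left placement within each cell are simplifying assumptions here; the actual method samples positions within cells and aligns them across frames.

```python
import numpy as np

def grid_mini_patch_sample(frame: np.ndarray, grid: int = 7,
                           patch: int = 32) -> np.ndarray:
    """Sketch of GMS-style sampling (placement simplified to each cell's
    top-left corner). Takes one patch x patch raw-resolution crop from each
    cell of a grid x grid layout, then splices them into a single
    (grid*patch, grid*patch) fragment image."""
    h, w = frame.shape[:2]
    cell_h, cell_w = h // grid, w // grid   # uniform grid over the frame
    rows = []
    for i in range(grid):
        # Crop at raw resolution: local quality info within each cell survives.
        row = [frame[i * cell_h: i * cell_h + patch,
                     j * cell_w: j * cell_w + patch] for j in range(grid)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)     # spliced fragment image

frame = np.random.rand(1080, 1920, 3)       # a 1080p frame
fragment = grid_mini_patch_sample(frame)    # 7 * 32 = 224 per side
```

Because the fragment is a fixed 224x224 regardless of the input resolution, the downstream network's cost no longer scales with video resolution, which is the source of the FLOPs reduction claimed above.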
In existing works, the temporal relationships between frames and their influence on video quality assessment (VQA) remain under-studied. These relationships produce two important types of effects on video quality. First, some temporal variations (such as shaking, flicker, and abrupt scene transitions) cause temporal distortions and lead to extra quality degradation, while other variations (e.g., those related to meaningful events) do not. Second, the human visual system often pays different attention to frames with different contents, so frames differ in their importance to the overall video quality. Based on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle both issues. To better distinguish temporal variations and thereby capture temporal distortions, we design a transformer-based Spatial-Temporal Distortion Extraction (STDE) module. To address temporal quality attention, we propose an encoder-decoder-like Temporal Content Transformer (TCT). We also introduce temporal sampling on features to reduce the input length of the TCT, improving the learning effectiveness and efficiency of this module. Consisting of the STDE and the TCT, the proposed temporal distortion-content transformer for video quality assessment (DisCoVQA) reaches state-of-the-art performance on several VQA benchmarks without any extra pre-training datasets, with up to 10% better generalization ability than existing methods. We also conduct extensive ablation experiments to prove the effectiveness of each part of our proposed model, and provide visualizations demonstrating that the proposed modules achieve our intention of modeling these temporal issues. We will publish our code and pretrained weights later.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hampering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
Learning feature interactions is the key to success for large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. To reduce the high cost of human effort in feature engineering, researchers have proposed several deep neural network (DNN)-based approaches to learn feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called the polynomial interaction network (PIN), which learns higher-order vector-wise interactions recursively. By integrating a subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on this architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate its performance on three real-world datasets. Our experimental results demonstrate the efficacy and effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
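One plausible reading of the recursive vector-wise interaction in a PIN-style layer can be sketched as below. The residual form, the per-layer field-mixing matrices, and all names are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def pin_forward(x0: np.ndarray, weights: list) -> np.ndarray:
    """Sketch of a polynomial-interaction-style recursion (an assumed form).

    Each layer mixes the original field embeddings with a learnable matrix
    and multiplies the result element-wise (Hadamard product) into the
    running interaction term, raising the polynomial order by one per
    layer; a residual link keeps lower-order terms in the mixture.

    x0: (fields, dim) field embedding matrix.
    weights: list of per-layer (fields, fields) mixing matrices.
    """
    x = x0
    for W in weights:
        x = x + (W @ x0) * x   # residual + vector-wise higher-order term
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 8))                       # 4 fields, dim 8
out = pin_forward(x0, [rng.standard_normal((4, 4)) for _ in range(3)])
```

Because each layer multiplies by a linear map of `x0`, a depth-L stack contains interaction terms up to order L+1, which matches the "bounded order" control the abstract describes.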
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.